Article Search
Full text (fee-based): 1029 articles; free: 109; free within China: 145
By discipline: Surveying and Mapping 329; Atmospheric Sciences 138; Geophysics 153; Geology 219; Oceanography 85; Astronomy 11; Multidisciplinary 120; Physical Geography 228
By year: 2024 (15), 2023 (61), 2022 (163), 2021 (179), 2020 (152), 2019 (98), 2018 (60), 2017 (66), 2016 (36), 2015 (26), 2014 (27), 2013 (54), 2012 (77), 2011 (28), 2010 (23), 2009 (19), 2008 (25), 2007 (30), 2006 (26), 2005 (16), 2004 (14), 2003 (10), 2002 (8), 2001 (14), 2000 (8), 1999 (11), 1998 (4), 1997 (10), 1996 (7), 1995 (5), 1994 (1), 1993 (2), 1992 (1), 1991 (3), 1988 (1), 1985 (1), 1984 (1), 1954 (1)
1,283 results found (search time: 21 ms).
21.
As a class of methods that has grown explosively in recent years, machine learning offers new ways of thinking and new research approaches for mineral exploration. This paper discusses the theoretical and methodological framework of mineral prospectivity prediction, reviews the current applications of machine learning in two areas of mineral prediction, namely feature information extraction and integrated information synthesis, and examines the difficulties and challenges machine learning faces in quantitative mineral resource prediction, including scarce and imbalanced training samples, the lack of uncertainty assessment during model training, a lack of studies feeding results back into geological understanding, and method selection. Using the Makeng-type iron deposits of southwestern Fujian as a case study, the basic workflow of machine-learning-based mineral prospectivity prediction is illustrated: (1) build a metallogenic model through study of the mineral system and determine the ore-controlling factors; (2) build an exploration model through study of the exploration system and supply the exploration data needed for predictive evaluation; (3) build a prediction model through study of the prediction and evaluation system and extract the predictor variables; (4) use a machine learning model to integrate the predictor variables and obtain a mineral favorability (prospectivity) map; (5) assess the uncertainty of the prediction performance and results; (6) delineate exploration targets/prospective areas and estimate resources. Finally, the paper outlines a research vision for a quantitative mineral resource prediction theory and methodology based on geoscience big data, guided by geoscience big data and Earth system theory and following the research route of "Earth system - mineral system - exploration system - prediction and evaluation system".
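As a rough illustration of step (4), the sketch below integrates hypothetical predictor rasters with a random forest to produce a favorability map; the layer count, sample sizes, class-weighting choice, and the use of random forest are assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch of step (4): integrating predictor layers with a machine
# learning model to produce a favorability (prospectivity) map.
# All arrays below are hypothetical stand-ins for real predictor rasters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Suppose each predictor layer has been rasterized onto the same grid.
n_rows, n_cols, n_layers = 200, 200, 6          # e.g. lithology, faults, gravity, ...
layers = rng.random((n_rows, n_cols, n_layers)) # stand-in predictor rasters
X_all = layers.reshape(-1, n_layers)            # one row per grid cell

# Known deposit cells (positives) and randomly chosen non-deposit cells (negatives).
pos_idx = rng.choice(X_all.shape[0], size=50, replace=False)
neg_idx = rng.choice(np.setdiff1d(np.arange(X_all.shape[0]), pos_idx),
                     size=500, replace=False)
X_train = np.vstack([X_all[pos_idx], X_all[neg_idx]])
y_train = np.concatenate([np.ones(len(pos_idx)), np.zeros(len(neg_idx))])

# Class weighting is one simple way to address the scarce/imbalanced samples
# mentioned in the abstract.
model = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
model.fit(X_train, y_train)

# Favorability map: predicted probability of the "deposit" class for every cell.
favorability = model.predict_proba(X_all)[:, 1].reshape(n_rows, n_cols)
print("favorability range:", favorability.min(), favorability.max())
```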
22.
China has built a high-quality exploration geochemistry database containing massive amounts of data, providing important data support for mineral exploration, environmental assessment, and geological surveys. How to process exploration geochemical data efficiently and to mine and identify deeper-level information from them has long been a hot and frontier topic in exploration geochemistry. Based on a systematic survey of the literature published by Chinese and international researchers over the past decade, this paper analyzes and compares exploration geochemical data processing methods and summarizes the main research progress made in China in this field, covering database construction, geochemical anomaly identification, and the evaluation of anomaly uncertainty, including: (1) fractal and multifractal models, which account for the complexity and scale invariance of geochemical spatial patterns, have been greatly developed and promoted worldwide, and Chinese researchers have led exploration geochemical data processing based on fractal and multifractal models; (2) machine learning and big-data thinking have emerged in this field, are rapidly attracting attention, and are becoming a research hotspot and frontier, with Chinese researchers pioneering big-data mining of exploration geochemical data based on machine learning algorithms; (3) Chinese researchers need to further strengthen research on handling missing values in exploration geochemical data and on the closure effect of compositional data. Future work should further strengthen the identification of weak and subdued geochemical anomalies, the evaluation of anomaly uncertainty, and the coupling of anomaly identification with the mechanisms of anomaly formation.
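As a small illustration of point (3), the sketch below applies a centered log-ratio (CLR) transform, one standard way of opening the closure effect of compositional geochemical data; the compositions shown are hypothetical and not taken from any database mentioned above.

```python
# Minimal sketch of a centered log-ratio (CLR) transform for compositional data.
# The sample values are hypothetical illustration only.
import numpy as np

def clr(composition: np.ndarray) -> np.ndarray:
    """Centered log-ratio transform of rows that sum to a constant (e.g. 100%)."""
    log_x = np.log(composition)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Three samples, four components (wt%), each row closed to 100.
samples = np.array([
    [55.0, 20.0, 15.0, 10.0],
    [60.0, 18.0, 12.0, 10.0],
    [48.0, 25.0, 17.0, 10.0],
])
print(clr(samples))   # rows now sum to ~0 and live in an unconstrained space
```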
23.
High spatiotemporal-resolution natural resource indicator data are essential for large-scale dynamic observation and trend assessment of natural resources. The massive multi-source data of the big-data era make efficient data fusion and utilization possible. Taking the reconstruction of Normalized Difference Vegetation Index (NDVI) data for the Hanjiang River basin as an example, this study builds an underlying PostgreSQL architecture for processing natural resource spatiotemporal big data, integrates data-level, feature-level, and decision-level fusion methods, and, based on machine learning algorithms, constructs an intelligent fusion technique for multi-source heterogeneous data oriented to natural resource information extraction, achieving efficient use of multi-source data and optimal selection of the feature space. A 1 km annual NDVI dataset for the Hanjiang River basin covering 2000-2019 was reconstructed, comprehensively reflecting the dynamics of vegetation in the basin. The results provide a scientific reference for efficient extraction and simulation analysis of geoscience spatiotemporal big data, and a more accurate and convenient technical means for quantitatively accounting for forest and grassland resource endowments and for exploring the spatiotemporal evolution of ecosystems.
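A minimal sketch of the feature-level fusion idea follows, assuming hypothetical per-pixel features from three sources and a random forest regressor as a stand-in for the paper's machine learning model; the source names and feature counts are assumptions.

```python
# Minimal sketch of feature-level fusion for NDVI reconstruction: features from
# several hypothetical sources are stacked per pixel and fed to one regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_pixels = 5000

# Hypothetical per-pixel features from three sources.
optical = rng.random((n_pixels, 4))   # e.g. surface reflectance bands
climate = rng.random((n_pixels, 3))   # e.g. temperature, precipitation, radiation
terrain = rng.random((n_pixels, 2))   # e.g. elevation, slope
X = np.hstack([optical, climate, terrain])   # feature-level fusion
y = rng.random(n_pixels)                     # stand-in NDVI target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1)
model.fit(X_train, y_train)
print("R^2 on held-out pixels:", model.score(X_test, y_test))
```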
24.
In recent years, landslide susceptibility mapping has improved substantially with advances in machine learning. However, challenges remain because of the limited availability of inventory data. This paper presents a novel method that improves the performance of machine learning techniques: it creates synthetic inventory data using Generative Adversarial Networks (GANs) to improve the prediction of landslides. Landslide inventory data for 156 landslide locations in Cameron Highlands, Malaysia, taken from previous projects the authors worked on, were used. Elevation, slope, aspect, plan curvature, profile curvature, total curvature, lithology, land use and land cover (LULC), distance to the road, distance to the river, stream power index (SPI), sediment transport index (STI), terrain roughness index (TRI), topographic wetness index (TWI), and vegetation density are the geo-environmental factors considered, based on suggestions from previous work on Cameron Highlands. To demonstrate the capability of GANs to improve landslide prediction models, the proposed GAN model is tested against benchmark models, namely Artificial Neural Network (ANN), Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and Bagging ensembles with ANN and SVM. The models were validated using the area under the receiver operating characteristic curve (AUROC). The DT, RF, SVM, ANN, and Bagging ensemble models achieved AUROC values of 0.90, 0.94, 0.86, 0.69, and 0.82 for training and 0.76, 0.81, 0.85, 0.72, and 0.75 for testing, respectively. When the additional samples were used, the same models achieved AUROC values of 0.92, 0.94, 0.88, 0.75, and 0.84 for training and 0.78, 0.82, 0.82, 0.78, and 0.80 for testing, respectively. Using the additional samples improved the test accuracy of all models except SVM. In data-scarce environments, therefore, this research shows that using GANs to generate supplementary samples is promising, because it can improve the predictive capability of common landslide prediction models.
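A minimal sketch of the data-augmentation idea follows, assuming a small tabular GAN over the 15 geo-environmental factors; the network sizes, training schedule, and the random stand-in data are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of using a GAN to synthesize extra landslide-inventory samples
# (tabular geo-environmental factors scaled to [0, 1]).
import torch
import torch.nn as nn

n_factors, latent_dim = 15, 8            # 15 conditioning factors as in the abstract

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                  nn.Linear(32, n_factors))
D = nn.Sequential(nn.Linear(n_factors, 32), nn.LeakyReLU(0.2),
                  nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(156, n_factors)        # stand-in for the 156 scaled inventory rows

for step in range(2000):
    # --- discriminator: distinguish real from generated samples ---
    z = torch.randn(64, latent_dim)
    fake = G(z).detach()
    idx = torch.randint(0, real.shape[0], (64,))
    d_loss = bce(D(real[idx]), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator: try to fool the discriminator ---
    z = torch.randn(64, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Synthetic samples that could be appended to the training set of RF/SVM/ANN, etc.
synthetic = G(torch.randn(100, latent_dim)).detach()
print(synthetic.shape)
```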
25.
The selection of a suitable discretization method (DM) to discretize spatially continuous variables (SCVs) is critical in ML-based natural hazard susceptibility assessment. However, few studies have considered the influence of the selected DM or how to efficiently select a suitable DM for each SCV. These issues are addressed in this study. The information loss rate (ILR), an index based on information entropy, appears suitable for selecting the optimal DM for each SCV. However, the ILR fails to capture the actual influence of discretization, because it only considers the total amount of information by which the discretized variable departs from the original SCV. To address this, we propose a new index, the information change rate (ICR), which focuses on the amount of information changed by discretization at the level of each cell, enabling identification of the optimal DM. We develop a case study with Random Forest (training/testing ratio of 7:3) to assess flood susceptibility in Wanan County, China. Approaches based on the area under the curve and on the susceptibility maps are used to compare the ILR and the ICR. The results show that the ICR-based optimal DMs are more rational than the ILR-based ones in both cases. Moreover, the ILR values are unnaturally small (<1%), whereas the ICR values are clearly more in line with general recognition (usually 10%-30%). These results demonstrate the superiority of the ICR. We consider that this study fills an existing research gap and improves ML-based natural hazard susceptibility assessment.
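For illustration only, the sketch below compares the Shannon entropy retained by two candidate discretization methods applied to a synthetic SCV; the exact ILR and ICR formulas are defined in the paper, so the quantities here are simplified stand-ins rather than the proposed indices.

```python
# Illustrative sketch of an entropy-based view of discretization, in the spirit
# of the ILR discussed above. Bin counts and the reference resolution are assumptions.
import numpy as np

def shannon_entropy(labels: np.ndarray) -> float:
    """Shannon entropy (in bits) of a discrete labelling."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
scv = rng.gamma(shape=2.0, scale=1.5, size=10_000)   # a spatially continuous variable

# Two candidate discretization methods (DMs), each producing 8 classes.
equal_width = np.digitize(scv, np.linspace(scv.min(), scv.max(), 9)[1:-1])
equal_quantile = np.digitize(scv, np.quantile(scv, np.linspace(0, 1, 9))[1:-1])

# Fine-grained (256-class) quantile discretization used as a reference.
h_ref = shannon_entropy(np.digitize(scv, np.quantile(scv, np.linspace(0, 1, 257))[1:-1]))
for name, labels in [("equal width", equal_width), ("equal quantile", equal_quantile)]:
    h = shannon_entropy(labels)
    print(f"{name}: entropy retained = {h / h_ref:.2%} of the fine-grained reference")
```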
26.
One important step in binary modeling of environmental problems is the generation of absence datasets, which are traditionally produced by random sampling and can undermine the quality of the outputs. To solve this problem, this study develops the Absence Point Generation (APG) toolbox, a Python-based ArcGIS toolbox for automated construction of absence datasets for geospatial studies. The APG employs a frequency-ratio analysis of four commonly used and important driving factors (altitude, slope degree, topographic wetness index, and distance from rivers) and considers buffer and density layers of the presence locations to define the low-potential or low-susceptibility zones in which absence datasets are generated. To test the APG toolbox, we applied two benchmark algorithms, random forest (RF) and boosted regression trees (BRT), in a case study of groundwater potential using three absence datasets: from the APG, from random sampling, and from the selection of absence samples (SAS) toolbox. BRT-APG and RF-APG achieved area under the receiver operating curve (AUC) values of 0.947 and 0.942, whereas BRT and RF performed more weakly with the SAS and random datasets. This corresponds to AUC improvements for BRT and RF of 7.2% and 9.7% over the random dataset, and of 6.1% and 5.4% over the SAS dataset, respectively. The APG also affected the importance of the input factors and the pattern of the groundwater potential maps, which underlines the importance of absence points in environmental binary problems. The proposed APG toolbox can easily be applied to other environmental hazards such as landslides, floods, gully erosion, and land subsidence.
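A minimal sketch of the frequency-ratio screening behind the APG idea follows, with hypothetical factor classes and presence locations; the 0.5 threshold for flagging low-potential classes is an assumption, not the toolbox's actual rule.

```python
# Minimal sketch of the frequency-ratio idea: classes of a driving factor where
# presence points are under-represented relative to the class's share of the
# study area (FR << 1) are candidate zones for drawing absence points.
import numpy as np

rng = np.random.default_rng(3)
n_cells = 100_000
slope_class = rng.integers(0, 5, size=n_cells)                  # factor classified into 5 bins
presence_cells = rng.choice(n_cells, size=300, replace=False)   # known presence locations

area_share = np.bincount(slope_class, minlength=5) / n_cells
presence_share = np.bincount(slope_class[presence_cells], minlength=5) / len(presence_cells)
frequency_ratio = presence_share / area_share

low_potential_classes = np.where(frequency_ratio < 0.5)[0]      # threshold is an assumption
print("FR per class:", np.round(frequency_ratio, 2))
print("candidate classes for absence sampling:", low_potential_classes)
```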
27.
Flood management and adaptation are important elements in sustaining farming production in the Vietnamese Mekong Delta (VMD). While over the past decades hydraulic development introduced by the central government has substantially benefited the rural economy, it has simultaneously caused multiple barriers to rural adaptation. We investigate the relational practices (i.e., learning interactions) taking place within and across the flood management and adaptation boundaries from the perspective of social learning. We explore whether and how adaptive knowledge (i.e., experimental and experiential knowledge) derived from farmers' everyday adaptation practices contributes to local flood management and adaptation policies in the selected areas. We collected data through nine focus groups with farmers and thirty-three interviews with government officials, environmental scientists, and farmers. Qualitative analysis suggests that such processes are largely shaped by the institutional context where the boundary is embedded. This study found that while the highly bureaucratic operation of flood management creates constraints for feedback, the more informal arrangements set in place at the local level provide flexible platforms conducive to open communication, collaborative learning, and exchange of knowledge among the different actors. This study highlights the pivotal role of shadow systems that provide space for establishing and maintaining informal interactions and relationships between social actors (e.g., interactions between farmers and extension officials) in stimulating and influencing, from the bottom up, the emergence of adaptive knowledge about flood management and adaptation in a local context.
28.
This paper applies recurrent neural networks (RNNs) to radar nowcasting. Using a predictive RNN architecture, a model is built from historical radar composite reflectivity and produces 1 h forecasts of composite reflectivity. The core of the predictive RNN is a spatiotemporal memory module added to the long short-term memory (LSTM) unit, which extracts spatial features of radar echoes at different scales; combined with the recurrent architecture, this effectively addresses the reflectivity prediction problem. Independent verification over long time series from the Beijing Daxing radar and the Guangzhou radar, together with verification on two severe convective weather cases, shows that, compared with traditional 1 h radar extrapolation nowcasting based on the cross-correlation method, the critical success index (CSI) at the 20 dBZ and 30 dBZ thresholds improves by 0.15-0.30, the probability of detection (POD) improves by 0.15-0.25, and the false alarm ratio (FAR) decreases by 0.15-0.20; the method also shows some ability to predict changes in reflectivity intensity.
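For reference, a small sketch of the verification scores quoted above (CSI, POD, FAR), computed from threshold exceedance on matched forecast and observation grids; the fields here are random stand-ins, not radar data.

```python
# Minimal sketch of CSI, POD and FAR from a 2x2 contingency table of
# reflectivity-threshold exceedance.
import numpy as np

def contingency_scores(forecast_dbz, observed_dbz, threshold=30.0):
    f = forecast_dbz >= threshold
    o = observed_dbz >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    csi = hits / (hits + misses + false_alarms)   # critical success index
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    return csi, pod, far

rng = np.random.default_rng(4)
forecast = rng.uniform(0, 60, size=(256, 256))   # stand-in 1 h forecast field (dBZ)
observed = rng.uniform(0, 60, size=(256, 256))   # stand-in observed field (dBZ)
print("CSI, POD, FAR at 30 dBZ:", contingency_scores(forecast, observed))
```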
29.
Preliminary application of artificial intelligence to hail identification and nowcasting   Cited 1 time (self-citations: 0, citations by others: 1)
Zhang Wenhai, Li Lei. Acta Meteorologica Sinica (《气象学报》), 2019, 77(2): 282-291
Based on three-dimensional mosaic data from ten S-band Doppler weather radars in Guangdong, a machine-learning-based artificial intelligence algorithm for hail identification and nowcasting was developed. The algorithm takes vertical and horizontal scans of radar echo reflectivity as the basic training set, treating reflectivity scans of hail clouds as positive samples and other reflectivity scans as negative samples; a Bayesian classifier is trained on the positive and negative sample sets so that the algorithm learns the internal patterns of hail clouds. Data from Guangdong for 2008-2013 and 2015-2016 were used for training, and data from 12 hail events in Guangdong in 2014 were used for verification. The comparison shows that the hit rate of the artificial intelligence method is 9 percentage points higher than that of the traditional conceptual-model method. The results indicate that artificial intelligence has a strong capability for identifying hail and similar highly nonlinear severe weather processes.
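A minimal sketch of the Bayesian-classifier idea follows, using a Gaussian naive Bayes model on hypothetical reflectivity-derived features; the feature set and synthetic data are assumptions, not the paper's actual radar scans.

```python
# Minimal sketch: naive Bayes classification of hail vs. non-hail cases from
# hypothetical reflectivity-derived features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Hypothetical per-storm features, e.g. max reflectivity (dBZ), echo-top height (km),
# vertically integrated liquid; hail cases drawn from a shifted distribution.
non_hail = rng.normal(loc=[45.0, 8.0, 20.0], scale=[5.0, 1.5, 8.0], size=(800, 3))
hail = rng.normal(loc=[58.0, 11.0, 45.0], scale=[4.0, 1.5, 10.0], size=(120, 3))
X = np.vstack([non_hail, hail])
y = np.concatenate([np.zeros(len(non_hail)), np.ones(len(hail))])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=5)
clf = GaussianNB().fit(X_train, y_train)

hits = np.sum((clf.predict(X_test) == 1) & (y_test == 1))
print("hit rate on hail cases:", hits / np.sum(y_test == 1))
```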
30.
Geophysical data sets are growing at an ever-increasing rate, requiring computationally efficient data selection (thinning) methods to preserve essential information. Satellites such as WindSat provide large data sets for assessing the accuracy and computational efficiency of data selection techniques. A new data thinning technique, based on support vector regression (SVR), is developed and tested. To manage large online satellite data streams, observations from WindSat are formed into subsets by Voronoi tessellation, and each subset is then thinned by SVR (TSVR). Three experiments are performed. The first confirms the viability of TSVR for a relatively small sample, comparing it to several commonly used data thinning methods (random selection, averaging, and Barnes filtering), and produces a 10% thinning rate (90% data reduction), low mean absolute errors (MAE), and large correlations with the original data. A second experiment, using a larger dataset, shows TSVR retrievals with MAE < 1 m s⁻¹ and correlations of 0.98. TSVR was an order of magnitude faster than the commonly used thinning methods. A third experiment applies a two-stage pipeline to TSVR to accommodate online data. The pipeline subsets reconstruct the wind field with the same accuracy as in the second experiment and are an order of magnitude faster than the non-pipeline TSVR. Therefore, pipeline TSVR is two orders of magnitude faster than commonly used thinning methods that ingest the entire data set. This study demonstrates that TSVR pipeline thinning is an accurate and computationally efficient alternative to commonly used data selection techniques.
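One plausible reading of "thinned by SVR" is that, within each Voronoi subset, only the support vectors of a fitted epsilon-SVR are retained; the sketch below illustrates that reading with synthetic wind speeds. The epsilon value, the synthetic field, and the resulting retention rate are all assumptions, not the paper's setup.

```python
# Minimal sketch of SVR-based thinning on one spatial subset: an epsilon-SVR is
# fitted to the observations and only its support vectors are kept.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
n_obs = 2000
lonlat = rng.uniform(0, 5, size=(n_obs, 2))                       # observation locations
wind_speed = (8 + 2 * np.sin(lonlat[:, 0]) + 1.5 * np.cos(lonlat[:, 1])
              + rng.normal(scale=0.3, size=n_obs))                # stand-in wind speeds (m/s)

# A wider epsilon tube keeps fewer support vectors, i.e. a stronger thinning.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(lonlat, wind_speed)
thinned_idx = svr.support_                                        # indices of retained obs
print(f"retained {len(thinned_idx)} of {n_obs} observations "
      f"({len(thinned_idx) / n_obs:.1%})")

# The fitted SVR can also reconstruct the field at the discarded locations.
mae = np.mean(np.abs(svr.predict(lonlat) - wind_speed))
print("reconstruction MAE (m/s):", round(mae, 3))
```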